Originally shared by gwern branwen:

"It’s All About The Benjamins: An empirical study on incentivizing users to ignore security advice", Nicolas Christin et al:
"We examine the cost for an attacker to pay users to execute arbitrary code—potentially malware. We asked users at home to download and run an exe- cutable we wrote without being told what it did and without any way of knowing it was harmless. Each week, we increased the payment amount. Our goal was to examine whether users would ignore common security advice—not to run untrusted executables—if there was a direct incentive, and how much this incentive would need to be. We observed that for payments as low as $0.01, 22% of the people who viewed the task ultimately ran our executable. Once increased to $1.00, this proportion increased to 43%. We show that as the price increased, more and more users who understood the risks ultimately ran the code. We conclude that users are generally unopposed to running programs of unknown provenance, so long as their incentives exceed their inconvenience.
The scale of the problem, however alarming it may seem, suggests that most Internet users whose computers have been turned into bots either do not seem to notice they have been compromised, or are unable or unwilling to take corrective measures, potentially because the cost of taking action outweighs the perceived benefits. There is evidence (see, e.g., [29]) of hosts infected by multiple pieces of malware, which indicates that a large number of users are not willing to undertake a complete reinstallation of their system until it ceases to be usable. In other words, many users seem to be content ignoring possible security compromises as long as the compromised state does not noticeably impact the performance of the machine.
Consequently, bots are likely to be unreliable. For instance, they may frequently crash due to multiple infections of poorly programmed malware. Their economic value to the miscreants using them is in turn fairly low; and indeed, advertised bot prices can be as low as $0.03-$0.04 per bot, according to a recent Symantec report [31]. Overall, bot markets are an interesting economic environment: goods are seemingly of low quality, and treachery amongst sellers and buyers is likely rampant [14], as transaction participants are all engaging in illegal commerce. Yet, the large number of bots, and the absence of any notable decrease in this number, seem to indicate that bot markets continue to thrive. This puzzle is further complicated by the fact that most surveyed Internet users profess a strong desire to have secure systems [5]. In this paper, we demonstrate that, far from being consistent with their stated preferences, in practice, users do not attach any significant economic value to the security of their systems.
We describe an experiment that we conducted using Amazon’s Mechanical Turk, where we asked users to download and run our “Distributed Computing Client” for one hour, with little explanation as to what the software actually did, in exchange for a nominal payment, ranging from $0.01 to $1.00.
Closer to the research on which we report in this paper, Grossklags and Acquisti provide quantitative estimates of how much people value their privacy. They show that most people are likely to give up considerable private information in exchange for $0.25 [11]. Similarly, Good et al. conducted a laboratory study wherein they observed whether participants would install potentially risky software with little functional benefit [10]. Participants overwhelmingly decided to affirmatively install peer-to-peer and desktop-enhancing software bundled with spyware. Most individuals only showed regret about their actions when presented with much more explicit notice and consent documents in a debriefing session. To most participants, the value extracted from the software (access to free music or screen savers) trumped the potentially dangerous security compromises and privacy invasions they had facilitated.
In September of 2010, we created a Mechanical Turk task offering workers the opportunity to “get paid to do nothing.” Only after accepting our task did participants see a detailed description: they would be participating in a research study on the “CMU Distributed Computing Project,” a fictitious project that we created. As part of this, we instructed participants to download a program and run it for an hour (Figure 1). We did not say what the application did. After an hour elapsed, the program displayed a code, which participants could submit to Mechanical Turk in order to claim their payment. Because this study involved human subjects, we required Institutional Review Board (IRB) approval. We could have received a waiver of consent so that we would not be required to inform participants that they were participating in a research study. However, we were curious whether—due to the pervasiveness of research tasks on Mechanical Turk—telling participants that this was indeed a research task would be an effective recruitment strategy. Thus, all participants were required to click through a consent form. Beyond the consent form, there was no evidence that they were participating in a research study; all data collection and downloads came from a third-party privately-registered domain, and the task was posted from a personal Mechanical Turk account not linked to an institutional address. No mention of the “CMU Distributed Computing Project” appeared on any CMU websites. Thus, it was entirely possible that an adversary had posted a task to trick users into downloading malware under the guise of participating in a research study, using a generic consent form and fictitious project names in furtherance of the ruse.
We reposted the task to Mechanical Turk every week for five weeks. We increased the price each subsequent week to examine how our results changed based on the offering price. Thus, the first week we paid participants $0.01, the second week $0.05, the third week $0.10, the fourth week $0.50, and the fifth week $1.00. In order to preserve data independence and the between-subjects nature of our experiment, we informed participants that they could not participate more than once. We enforced this by rejecting results from participants who had already been compensated in a previous week. We further restricted our task to users of Microsoft Windows XP or later (i.e., XP, Vista, or 7). In Windows Vista and later, a specific security mitigation, User Account Control (UAC), is included to dissuade users from executing programs that require administrator-level access. When such a program is executed, a prompt is displayed informing the user that administrator-level access has been requested and asking whether it should be allowed (Figure 2). To examine the effectiveness of this warning, we created a second between-subjects condition. For each download, there was a 50% chance that the participant would download a version of our software that requested administrator-level access via the application manifest, which in turn would cause the UAC warning to be displayed prior to execution.
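[Technical aside: the elevation request described above lives in the executable's embedded manifest, whose requestedExecutionLevel field is set to "requireAdministrator"; that setting is what triggers the UAC prompt. Below is a minimal Python sketch, not from the paper, of how a program can check at runtime whether it actually received administrator rights, i.e., whether the user clicked through UAC:]

    import ctypes
    import sys

    def is_elevated() -> bool:
        """Return True if the current process holds administrator rights."""
        if sys.platform != "win32":
            return False  # UAC elevation is a Windows-only concept
        try:
            # shell32's IsUserAnAdmin() reports whether the process token
            # is elevated (deprecated but still widely available).
            return bool(ctypes.windll.shell32.IsUserAnAdmin())
        except (AttributeError, OSError):
            return False

    if __name__ == "__main__":
        print("Running elevated:", is_elevated())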
During the course of our five-week study, our task was viewed 2,854 times. This corresponds to 1,714 downloads and 965 confirmed executions. We found that the proportion of participants who executed the program significantly increased with price, though even for a payment of $0.01, 22% of the people who viewed the task downloaded and executed the program (Table 1). This raises questions about the effectiveness of well-known security advice.
Additionally, we detected VMware Tools running on a further fifteen machines (sixteen in total), and Parallels Tools running on a single machine. Thus, we can confirm that at least seventeen participants (1.8% of 965) took the precaution of using a VM to execute our code. Eleven of these participants were in the $1.00 condition, five were in the $0.50 condition, and one was in the $0.01 condition... We do not know if this was a precaution or not. It is equally likely that participants were simply using VMs because they were Mac or Linux users who wanted to complete a Windows task.
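[The paper does not say how the VM tooling was detected; a plausible method is scanning the process list for the guest-tools daemons. A hedged Python sketch, assuming the third-party psutil package and assuming the daemon names below (vmtoolsd.exe for VMware Tools, prl_cc.exe for Parallels Tools) are the right ones to look for:]

    import psutil  # third-party: pip install psutil

    # Guest-tools daemon names; these specific names are an assumption,
    # not something the paper documents.
    VM_DAEMONS = {
        "vmtoolsd.exe": "VMware Tools",
        "prl_cc.exe": "Parallels Tools",
    }

    def detect_vm_tools() -> set:
        """Return the set of VM guest-tools packages seen in the process list."""
        found = set()
        for proc in psutil.process_iter(["name"]):
            name = (proc.info["name"] or "").lower()
            if name in VM_DAEMONS:
                found.add(VM_DAEMONS[name])
        return found

    if __name__ == "__main__":
        print(detect_vm_tools() or "no VM guest tools detected")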
[Suggests the Nigerian email spam strategy: don't even try to attack the smart people but go after the dumb ones, who'll root their machines with random executables for a penny.]
All in all, we found no significant differences between the pricing conditions with regard to malware infections or the use of security software; at least 16.4% (158 of the 965 who ran the program) had a malware infection, whereas as many as 79.4% (766 of 965) had security software running. Surprisingly, we noticed a significant positive association between malware infections and security software usage (φ = 0.066, p < 0.039). That is, participants with security software were more likely to also have malware infections (17.6% of 766), whereas those without security software were less likely to have malware infections (11.6% of 199). While counterintuitive, this may indicate that users tend to exhibit risky behavior when they have security software installed, because they blindly trust the software to fully protect them.
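[The reported φ checks out against the quoted percentages. A worked Python sketch; the 2×2 cell counts below are reconstructed from the reported marginals (766 with security software, 17.6% of them infected; 199 without, 11.6% infected) and are an assumption, not taken from the paper's tables:]

    import math

    # Reconstructed 2x2 contingency table:
    # rows = security software yes/no, columns = infected yes/no.
    a = 135  # software + infected    (~17.6% of 766)
    b = 631  # software + clean
    c = 23   # no software + infected (~11.6% of 199)
    d = 176  # no software + clean
    n = a + b + c + d  # 965 participants who ran the program

    # Phi coefficient for a 2x2 table.
    phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    print(f"phi = {phi:.3f}")  # -> 0.066, matching the paper

    # The chi-squared statistic for a 2x2 table is n * phi^2; with one
    # degree of freedom, chi2 ~= 4.25 corresponds to p ~= 0.039.
    print(f"chi2 = {n * phi ** 2:.2f}")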
We gauged participants’ security expertise with four questions:
– Do you know any programming languages?
– Is computer security one of your main job duties?
– Have you attended a computer security conference or class in the past year?
– Do you have a degree in computer science or another technical field (e.g., electrical engineering, computer engineering)?
We observed no differences between the price conditions with regard to the individual questions, nor when we created a “security expert score” based on the combination of these factors. That is, the proportion of people who answered affirmatively to each of the questions remained constant across the price intervals.
We asked participants to rate the danger of running programs downloaded from Mechanical Turk on a 5-point Likert scale. We found that as the price went up, participants’ perceptions of the danger also increased (F(4, 508) = 3.165, p < 0.014). Post-hoc analysis showed that risk perceptions were similar between the $0.01, $0.05, and $0.10 price points (μ = 2.43, σ = 1.226), as well as between the $0.50 and $1.00 price points (μ = 2.92, σ = 1.293); once the price hit $0.50 and above, participants had significantly higher risk perceptions (t(511) = 3.378, p < 0.001). This indicates that as the payment was increased, people who “should have known better” were enticed by the payment; individuals with higher risk perceptions and awareness of good security practices did not take the risk when the payment was set at the lower points, but ignored their concerns and ran the program once the payment was increased to $0.50.
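[For readers wanting to reproduce this style of analysis: a sketch of a one-way ANOVA plus a pooled post-hoc t-test over per-condition Likert ratings, using scipy. The ratings below are made up for illustration; the study's raw data are not public:]

    from scipy import stats

    # Hypothetical 5-point Likert danger ratings per price condition;
    # illustrative only, not the study's data.
    ratings = {
        0.01: [2, 3, 2, 1, 3],
        0.05: [2, 2, 3, 3, 2],
        0.10: [3, 2, 2, 3, 2],
        0.50: [3, 4, 3, 2, 4],
        1.00: [4, 3, 4, 3, 3],
    }

    # One-way ANOVA across all five conditions (the paper reports
    # F(4, 508) = 3.165, p < 0.014 on the real data).
    f_stat, p_anova = stats.f_oneway(*ratings.values())
    print(f"ANOVA: F = {f_stat:.3f}, p = {p_anova:.3f}")

    # Post-hoc comparison: pool the two high price points against the
    # three low ones, mirroring the paper's t(511) = 3.378 contrast.
    low = ratings[0.01] + ratings[0.05] + ratings[0.10]
    high = ratings[0.50] + ratings[1.00]
    t_stat, p_t = stats.ttest_ind(high, low)
    print(f"t-test: t = {t_stat:.3f}, p = {p_t:.3f}")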
We periodically browsed Mechanical Turk user forums such as Turker Nation [2] and Turkopticon [3], and noticed with interest that our task was ranked as legitimate because we were paying users on time and anti-virus software was not complaining about our program. Clearly, neither of these observations has any bearing on potential security breaches caused by our software. Anti-virus software generally relies on malware being added to blacklists and may not complain about applications that were given full administrative access by a consenting user. Perhaps a warning that the application is sending network traffic would pop up, but most users would dismiss it, since network activity is consistent with the purpose of a “Distributed Computing Client.” As such, it appears that people reinforce their false sense of security with justifications that are irrelevant to the problem at hand.
Last, one Turkopticon user had an interesting comment. He incorrectly believed we were “running a port-scanner,” and was happy to provide us with potentially private information in exchange for the payment he received. This echoes Grossklags and Acquisti’s finding [11] that people do not place significant value on their privacy.
Even though around 70% of all our survey participants understood that it was dangerous to run unknown programs downloaded from the Internet, all of them chose to do so once we paid them."
www.andrew.cmu.edu/user/nicolasc/publications/CEVG-FC11.pdf